Patent Retrieval: A Multi-Modal Visual Analytics Approach
Authors
Abstract
Claiming intellectual property for an invention through patents is a common way to protect ideas and technological advancements. However, patents only protect ideas that are new. Assessing the novelty of filed patent applications is therefore a crucial, yet very time-consuming manual task. Current patent retrieval systems neither make use of all available data nor explain the similarity between patents. We support patent officials with an enhanced multi-modal, Visual Analytics patent retrieval system. By combining various similarity measures and incorporating user feedback, we achieve significantly better query results than state-of-the-art methods.
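The abstract does not spell out how the individual similarity measures are fused or how user feedback enters the ranking; the minimal Python sketch below only illustrates the general idea of weighting several per-modality similarity scores and nudging the weights from relevance judgments. All names, example scores, and the update rule are hypothetical, not the paper's method.

    # Hypothetical sketch: fuse per-modality patent similarities and adapt
    # the fusion weights from user relevance feedback.
    def fused_score(sims, weights):
        """Weighted sum of per-modality similarity scores (text, drawings, metadata, ...)."""
        return sum(weights[m] * sims[m] for m in sims)

    def update_weights(weights, sims, relevant, lr=0.1):
        """Shift weight toward modalities that agreed with the user's judgment."""
        for m, s in sims.items():
            weights[m] += lr * s if relevant else -lr * s
        total = sum(max(w, 1e-6) for w in weights.values())
        return {m: max(w, 1e-6) / total for m, w in weights.items()}

    weights = {"claims_text": 0.4, "drawings": 0.3, "metadata": 0.3}   # assumed modalities
    candidate = {"claims_text": 0.8, "drawings": 0.6, "metadata": 0.2} # scores for one candidate patent
    print(fused_score(candidate, weights))
    weights = update_weights(weights, candidate, relevant=True)        # user marked it relevant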
Similar Resources
Image Retrieval: Content versus Context
In this paper, we introduce a new approach to image retrieval. This new approach takes the best from two worlds, combining image features (content) and words from collateral text (context) into one semantic space. Our approach uses Latent Semantic Indexing, a method that exploits co-occurrence statistics to uncover hidden semantics. This paper shows how this method, which has proven successful in bot...
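As an illustration of the general LSI idea described above (not the authors' implementation), the sketch below stacks visual-feature columns and text-term columns into one document-by-feature matrix and projects it into a shared latent space with a truncated SVD; all data, column meanings, and dimensions are made up.

    # Illustrative sketch: joint "content + context" matrix reduced with LSI.
    import numpy as np

    def lsi(doc_by_feature, k):
        """Project documents into a k-dimensional latent semantic space."""
        U, S, Vt = np.linalg.svd(doc_by_feature, full_matrices=False)
        return U[:, :k] * S[:k]   # document coordinates in the latent space

    content = np.random.rand(4, 3)   # e.g. color/texture histogram bins per document
    context = np.random.rand(4, 3)   # e.g. tf-idf weights of collateral text terms
    latent = lsi(np.hstack([content, context]), k=2)

    def cosine(a, b):
        """Cosine similarity between two documents in the shared semantic space."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    print(cosine(latent[0], latent[1]))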
Exploring Multi-Scale Spatiotemporal Twitter User Mobility Patterns with a Visual-Analytics Approach
Understanding human mobility patterns is of great importance for urban planning, traffic management, and even marketing campaigns. However, the capability of capturing detailed human movements with fine-grained spatial and temporal granularity is still limited. In this study, we extracted high-resolution mobility data from a collection of over 1.3 billion geo-located Twitter messages. Regarding ...
XRCE's Participation at Patent Image Classification and Image-based Patent Retrieval Tasks of the Clef-IP 2011
The aim of this document is to describe the methods we used in the Patent Image Classification and Image-based Patent Retrieval tasks of the Clef-IP 2011 track. The patent image classification task consisted of categorizing patent images into predefined categories such as abstract drawing, graph, flowchart, table, etc. Our main aim in participating in this sub-task was to test how our image cat...
Multi-modal Query Expansion Based on Local Analysis for Medical Image Retrieval
A unified medical image retrieval framework integrating visual and text keywords using a novel multi-modal query expansion (QE) is presented. For the content-based image search, visual keywords are modeled using support vector machine (SVM)-based classification of local color and texture patches from image regions. For the text-based search, keywords from the associated annotations are extracte...
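The snippet below is only a hedged sketch of the two ingredients mentioned in this abstract, run on synthetic data: an SVM that maps local color/texture descriptors to a small vocabulary of visual keywords, and a simple local-analysis expansion that adds frequent terms from the top-ranked results to the text query. None of this is the authors' code, and all names and sizes are assumptions.

    # Hedged sketch of SVM-based visual keywords plus local-analysis query expansion.
    import numpy as np
    from sklearn.svm import SVC

    # 1) Visual keywords: classify patch descriptors into a small vocabulary.
    patch_features = np.random.rand(200, 16)            # synthetic color/texture descriptors
    patch_labels = np.random.randint(0, 5, size=200)    # 5 assumed visual-keyword classes
    svm = SVC(kernel="rbf").fit(patch_features, patch_labels)

    def visual_keyword_histogram(patches):
        """Histogram of predicted visual keywords for one image's patches."""
        preds = svm.predict(patches)
        return np.bincount(preds, minlength=5) / len(preds)

    # 2) Local-analysis expansion: add frequent terms from top-ranked results.
    def expand_query(query_terms, top_result_annotations, n_new=3):
        counts = {}
        for doc in top_result_annotations:
            for term in doc:
                if term not in query_terms:
                    counts[term] = counts.get(term, 0) + 1
        new_terms = sorted(counts, key=counts.get, reverse=True)[:n_new]
        return list(query_terms) + new_terms

    print(expand_query(["fracture"], [["fracture", "xray", "bone"], ["xray", "ct"]]))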
MLKD's Participation at the CLEF 2011 Photo Annotation and Concept-Based Retrieval Tasks
We participated in both the photo annotation and concept-based retrieval tasks of CLEF 2011. For the annotation task we developed visual, textual and multi-modal approaches using multi-label learning algorithms from the Mulan open source library. For the visual model we employed the ColorDescriptor software to extract visual features from the images using 7 descriptors and 2 detectors. For each ...